10 research outputs found

    Concepts for handling heterogeneous data transformation logic and their integration with TraDE middleware

    The concept of programming-in-the-Large became a substantial part of modern computer-based scientific research with the advent of web services and orchestration languages. While the notions of workflows and service choreographies help to reduce complexity by providing means to support the communication between the involved participants, the overall process remains complex. The TraDE Middleware and its underlying concepts were introduced to perform the modeled data exchange across choreography participants in a transparent and automated fashion. To achieve both transparency and automation, however, the TraDE Middleware must be capable of transforming the data along its path. Transparent data transformation is difficult to achieve for several reasons: the diversity of required execution environments, complicated configuration processes, and the heterogeneity of data transformation software, which leads to tedious integration processes that often involve manually wrapping the software. A method for handling data transformation applications in a standardized manner can simplify the modeling and execution of scientific service choreographies with the TraDE concepts applied. In this master's thesis, we analyze various aspects of this problem and conceptualize an extensible framework for handling data transformation applications. The resulting prototypical implementation of the presented framework provides means to address data transformation applications in a standardized manner.
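
    A minimal sketch of the kind of standardized wrapper interface such a framework might expose, so that heterogeneous transformation software becomes addressable by name instead of being wrapped manually for each integration. All names here are hypothetical illustrations, not the thesis' actual API:

        from abc import ABC, abstractmethod

        class Transformation(ABC):
            """Hypothetical uniform contract for wrapping heterogeneous
            data transformation software behind a single interface."""

            @abstractmethod
            def transform(self, data: bytes) -> bytes:
                """Apply the wrapped transformation to serialized input."""

        REGISTRY: dict[str, Transformation] = {}

        def register(name: str, t: Transformation) -> None:
            """Make a wrapped application addressable by a stable name so
            that middleware can invoke it without manual integration."""
            REGISTRY[name] = t

        class CsvToJson(Transformation):
            """Toy wrapped application: CSV rows to a JSON array."""
            def transform(self, data: bytes) -> bytes:
                import csv, io, json
                rows = list(csv.DictReader(io.StringIO(data.decode())))
                return json.dumps(rows).encode()

        register("csv-to-json", CsvToJson())
        print(REGISTRY["csv-to-json"].transform(b"a,b\n1,2\n"))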

    From Serverful to Serverless: A Spectrum of Patterns for Hosting Application Components

    The diversity of available cloud service models yields multiple hosting variants for application components. Moreover, the overall trend of reducing control over infrastructure and scaling configuration makes it non-trivial to decide which hosting variant best suits a given software component. In this work, we introduce a spectrum of component hosting patterns that covers various combinations of management responsibilities related to (i) the deployment stack required by a given component and (ii) the required infrastructure resources and the component's scaling rules. We validate the presented patterns by identifying and showing at least three real-world occurrences of each pattern, following the well-known Rule of Three.
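
    To illustrate the idea of the spectrum, the following sketch encodes hosting variants as combinations of the two management-responsibility dimensions named above; the variant names are placeholders for illustration, not the patterns from the paper:

        from dataclasses import dataclass

        @dataclass(frozen=True)
        class HostingVariant:
            name: str
            consumer_manages_stack: bool      # (i) deployment stack of the component
            consumer_manages_resources: bool  # (ii) infrastructure resources and scaling rules

        # From full consumer control (serverful) to none (serverless), via mixed variants.
        SPECTRUM = [
            HostingVariant("self-hosted VM (serverful)", True, True),
            HostingVariant("own stack, provider-managed scaling", True, False),
            HostingVariant("managed stack, own scaling rules", False, True),
            HostingVariant("FaaS (serverless)", False, False),
        ]

        for v in SPECTRUM:
            print(f"{v.name}: stack={'consumer' if v.consumer_manages_stack else 'provider'}, "
                  f"resources/scaling={'consumer' if v.consumer_manages_resources else 'provider'}")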

    Selection and Optimization of Hyperparameters in Warm-Started Quantum Optimization for the MaxCut Problem

    Today’s quantum computers are limited in their capabilities, e.g., in the size of executable quantum circuits. The Quantum Approximate Optimization Algorithm (QAOA) addresses these limitations and is, therefore, a promising candidate for achieving a near-term quantum advantage. Warm-starting can further improve QAOA by utilizing classically pre-computed approximations to achieve better solutions at small circuit depths. However, warm-starting requirements often depend on the quantum algorithm and the problem at hand. Warm-started QAOA (WS-QAOA) requires developers to understand how to select approach-specific hyperparameter values that tune the embedding of the classically pre-computed approximations. In this paper, we address the problem of hyperparameter selection in WS-QAOA for the maximum cut problem, using the classical Goemans–Williamson algorithm for the pre-computations. The contributions of this work are as follows: we implement and run a set of experiments to determine how different hyperparameter settings influence the solution quality. In particular, we (i) analyze how the regularization parameter that tunes the bias of the warm-started quantum algorithm towards the pre-computed solution can be selected and optimized, (ii) compare three distinct optimization strategies, and (iii) evaluate five objective functions for the classical optimization, two of which we introduce specifically for our scenario. The experimental results provide insights into the efficient selection of the regularization parameter, optimization strategy, and objective function and thus support developers in setting up one of the central algorithms of contemporary and near-term quantum computing.
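
    As a hedged illustration of the regularization parameter's role, the sketch below follows the common WS-QAOA construction in which a classically pre-computed cut value c_i in {0, 1} is clamped to [epsilon, 1 - epsilon] before being turned into initial-state rotation angles; the function and variable names are ours, not the paper's:

        import numpy as np

        def warm_start_angles(cut_bits, epsilon):
            """Clamp a pre-computed (e.g., Goemans-Williamson) cut with the
            regularization parameter epsilon and return the RY angles that
            prepare the corresponding WS-QAOA initial state."""
            c = np.clip(np.asarray(cut_bits, dtype=float), epsilon, 1.0 - epsilon)
            return 2.0 * np.arcsin(np.sqrt(c))  # theta_i = 2 arcsin(sqrt(c_i))

        # Example: 4-vertex cut {0, 1} vs. {2, 3}. A nonzero epsilon biases the
        # initial state away from a pure basis state so the optimizer can still move.
        print(warm_start_angles([0, 0, 1, 1], epsilon=0.25))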

    Configurable Readout Error Mitigation in Quantum Workflows

    Current quantum computers are still error-prone, with measurement errors being one of the factors limiting the scalability of quantum devices. To reduce their impact, a variety of readout error mitigation methods, mostly relying on classical post-processing, have been developed. However, the application of these methods is complicated by their heterogeneity and a lack of information regarding their functionality, configuration, and integration. To facilitate their use, we provide an overview of existing methods and evaluate general and method-specific configuration options. Quantum applications comprise many classical pre- and post-processing tasks, including readout error mitigation. Automation can facilitate the execution of these often complex tasks, as their manual execution is time-consuming and error-prone. Workflow technology is a promising candidate for the orchestration of heterogeneous tasks, offering advantages such as reliability, robustness, and monitoring capabilities. In this paper, we present an approach to abstractly model quantum workflows comprising configurable readout error mitigation tasks. Based on the method configuration, these workflows can then be automatically refined into executable workflow models. To validate the feasibility of our approach, we provide a prototypical implementation and demonstrate it in a case study from the quantum humanities domain.
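
    As a hedged example of the kind of classical post-processing such mitigation tasks perform, the sketch below applies simple calibration-matrix (confusion-matrix) readout error mitigation; this is only one representative of the methods surveyed, and the names are illustrative:

        import numpy as np

        def mitigate_readout(p_noisy, confusion):
            """Least-squares inversion of a calibration (confusion) matrix:
            confusion[i, j] = P(measured outcome i | prepared basis state j),
            estimated beforehand from calibration circuits."""
            x, *_ = np.linalg.lstsq(confusion, p_noisy, rcond=None)
            x = np.clip(x, 0.0, None)   # project back onto valid probabilities
            return x / x.sum()

        # Single-qubit example with 5% 0->1 and 8% 1->0 readout flips.
        A = np.array([[0.95, 0.08],
                      [0.05, 0.92]])
        p_noisy = A @ np.array([0.7, 0.3])   # what the device would report
        print(mitigate_readout(p_noisy, A))  # recovers ~[0.7, 0.3]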

    TOSCA Light: Bridging the Gap between the TOSCA Specification and Production-ready Deployment Technologies

    Prototype and proof-of-concept implementation of the TOSCA Light toolchain (+ the modeling repository used in the validation).